- From: "Neuron-Digest Moderator" <neuron-request@cattell.psych.upenn.edu>
- To: Neuron-Distribution: ;
- Subject: Neuron Digest V10 #14 (conferences)
- Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
- X-Errors-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
- Organization: University of Pennsylvania
- Date: Wed, 04 Nov 92 10:29:59 EST
- Message-ID: <16868.720890999@cattell.psych.upenn.edu>
- Sender: marvit@cattell.psych.upenn.edu
-
- Neuron Digest Wednesday, 4 Nov 1992
- Volume 10 : Issue 14
-
- Today's Topics:
- NIPS*92 WORKSHOP PROGRAM
-
-
- Send submissions, questions, address maintenance, and requests for old
- issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
- available from cattell.psych.upenn.edu (130.91.68.31). Back issues
- requested by mail will eventually be sent, but may take a while.
-
- ----------------------------------------------------------------------
-
- Subject: NIPS*92 WORKSHOP PROGRAM
- From: Steve Hanson <jose@tractatus.siemens.com>
- Date: Fri, 02 Oct 92 14:41:40 -0500
-
-
- NIPS*92 WORKSHOP PROGRAM
-
-
- For further information and queries about a particular workshop,
- please contact the workshop chairpersons listed below.
-
- =========================================================================
- Character Recognition Workshop
-
- Organizers: C. L. Wilson and M. D. Garris, NIST
-
- Abstract:
- In order to discuss recent developments and research in OCR technology,
- six speakers have been invited to present their organizations'
- perspectives on the subject. Those invited represent a diverse
- group of organizations actively developing OCR systems. Each speaker
- participated in the first OCR Systems Conference sponsored by the Bureau
- of the Census and hosted by NIST. Therefore, the impressions and results
- gained from the conference should provide significant context for
- discussions.
-
- Invited presentations:
- C. L. Wilson, NIST, "Census OCR Results - Are Neural Networks Better?"
- T. P. Vogl, ERIM, "Effect of Training Set Size on OCR Accuracy"
- C. L. Scofield, Nestor, "Multiple Network Architectures for Handprint
- and Cursive Recognition"
- A. Rao, Kodak, "Directions in OCR Research and Document Understanding
- at Eastman Kodak Company"
- C. J. C. Burges, AT&T, "Overview of AT&T OCR Technology"
- K. M. Mohiuddin, IBM, "Handwriting OCR Work at IBM Almaden Research Center"
- =========================================================================
- Neural Chips: State of the Art and Perspectives.
-
- Organizer: Eros Pasero pasero@polito.it
-
- Abstract:
- We will encourage lively audience discussion of important issues
- in neural net hardware, such as:
- - - Taxonomy: neural computer, neural processor, neural coprocessor
- - - Digital vs. Analog: limits and benefits of the two approaches.
- - - Algorithms or neural constraints?
- - - Neural chips implemented in universities
- - - Industrial chips (e.g. Intel, AT&T, Synaptics)
- - - Future perspectives
-
- Invited presentations: TBA
- =========================================================================
- Reading the Entrails: Understanding What's Going On Inside a Neural Net
-
- Organizer: Scott E. Fahlman, Carnegie Mellon University
- fahlman@cs.cmu.edu
-
- Abstract:
- Neural networks can be viewed as "black boxes" that learn from examples,
- but often it is useful to figure out what sort of internal knowledge
- representation (or set of "features") is being employed, or how the inputs
- are combined to produce particular outputs. There are many reasons why we
- might seek such understanding: It can tell us which inputs really are
- needed and which are the most critical in producing a given output. It can
- produce explanations that give us more confidence in the network's
- decisions. It can help us to understand how the network would react to new
- situations. It can give us insight into problems with the network's
- performance, stability, or learning behavior. Sometimes, it's just a
- matter of scientific curiosity: if a network does something impressive, we
- want to know how it works.
-
- In this workshop we will survey the available techniques for understanding
- what is happening inside a neural network, both during and after training.
- We plan to have a number of presenters who can describe or demonstrate
- various network-understanding techniques, and who can tell us what useful
- insights were gained using these techniques. Where appropriate, presenters
- will be encouraged to use slides or videotape to illustrate their favorite
- methods.
-
- Among the techniques we will explore are the following: Diagrams of
- weights, unit states, and their trajectories over time. Diagrams of the
- receptive fields of hidden units. How to create meaningful diagrams in
- high-dimensional spaces. Techniques for extracting boolean or fuzzy
- rule-sets from a trained network. Techniques for extracting explanations
- of individual network outputs or decisions. Techniques for describing the
- dynamic behavior of recurrent or time-domain networks. Learning
- pathologies and what they look like.
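-
- As a minimal sketch of the first of these techniques, the fragment
- below (Python; the weights are random placeholders, not those of any
- trained network) renders a weight matrix as a text-mode Hinton-style
- diagram, with character "size" tracking |w| and an explicit sign:
-
-     import numpy as np
-
-     def hinton_text(w, symbols=" .oO#"):
-         """Print a crude Hinton diagram: each weight becomes a
-         character whose size class grows with |w|; '+'/'-' is sign."""
-         scale = float(np.abs(w).max()) or 1.0
-         for row in w:
-             cells = []
-             for v in row:
-                 k = int(round(abs(v) / scale * (len(symbols) - 1)))
-                 cells.append(("+" if v >= 0 else "-") + symbols[k])
-             print(" ".join(cells))
-
-     rng = np.random.default_rng(0)
-     hinton_text(rng.normal(size=(4, 8)))  # e.g. 4 hidden units, 8 inputs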
-
- Invited presentations:
- Still to be determined. The workshop organizer would like to hear from
- potential speakers who would like to give a short presentation of the kind
- described above. Techniques that have proven useful in real-world problems
- are especially sought, as are short videotape segments showing network
- =========================================================================
- COMPUTATIONAL APPROACHES TO BIOLOGICAL SEQUENCE ANALYSIS--
- NEURAL NET VERSUS TRADITIONAL PERSPECTIVES
-
- Organizers: Paul Stolorz, Santa Fe Institute and Los Alamos National Lab
- Jude Shavlik, University of Wisconsin.
-
- Abstract:
- There has been a good deal of recent interest in the use of neural
- networks to tackle several important biological sequence analysis
- problems. These problems range from the prediction of protein secondary
- and tertiary structure, to the prediction of DNA protein-coding regions
- and regulatory sites, to the identification of homologies. Several
- promising developments have been presented at NIPS meetings in the past
- few years by researchers in the connectionist field.
- Furthermore, a number of structural biologists and chemists have been
- successfully using neural network methods.
-
- The sequence analysis applications encompass a rather large amount of
- neural network territory, ranging from feedforward architectures
- to recurrent nets, Hidden Markov Models and related approaches.
- The aim of this workshop is to review the progress made by these disparate
- strands of endeavor, and to analyze their respective strengths and weaknesses.
- In addition, the intention is to compare the class of neural network methods
- with alternative approaches, both new and traditional. These alternatives
- include knowledge-based reasoning, standard non-parametric statistical
- analysis, Hidden Markov models and statistical physics methods.
- We hope that by careful consideration and comparison of
- neural nets with several of the alternatives mentioned above, methods can be
- found which are superior to any of the individual techniques developed to date.
- This discussion will be a major focus of the workshop, and we both anticipate
- and encourage vigorous debate.
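-
- As a point of reference for the HMM side of that comparison, a
- minimal sketch (Python; all probabilities are illustrative
- placeholders, not fitted values) of the forward algorithm scoring a
- DNA sequence under a toy two-state "coding vs. non-coding" model:
-
-     import numpy as np
-
-     idx = {c: i for i, c in enumerate("ACGT")}
-     pi = np.array([0.5, 0.5])             # initial state probabilities
-     A  = np.array([[0.9, 0.1],            # state transition matrix
-                    [0.2, 0.8]])
-     B  = np.array([[0.30, 0.30, 0.30, 0.10],   # emissions, state 0
-                    [0.25, 0.25, 0.25, 0.25]])  # emissions, state 1
-
-     def log_likelihood(seq):
-         """Scaled forward algorithm: returns log P(seq | model)."""
-         alpha = pi * B[:, idx[seq[0]]]
-         ll = 0.0
-         for c in seq[1:]:
-             alpha = (alpha @ A) * B[:, idx[c]]
-             s = alpha.sum()
-             ll += np.log(s)
-             alpha /= s                    # rescale to avoid underflow
-         return ll + np.log(alpha.sum())
-
-     print(log_likelihood("ACGTGGGCCA"))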
-
- Invited presentations:
- Jude Shavlik, U. Wisconsin: Learning Important Relations in Protein Structures
- Gary Stormo, U. Colorado: TBA
- Larry Hunter, National Library of Medicine:
- Bayesian Clustering of Protein Structures
- Soren Brunak, DTH: Network analysis of protein structure and the genetic code
- David Haussler, U.C. Santa Cruz: Modeling Protein Families with Hidden
- Markov Models
- Paul Stolorz and Joe Bryngelson, Santa Fe Institute and Los Alamos:
- Information Theory and Statistical Physics in Protein Structures
- =========================================================================
- Statistical Regression Methods and Feedforward Nets
-
- Organizers: Lei Xu, Harvard Univ. and Adam Krzyzak, Concordia Univ.
-
- Abstract:
- Feedforward neural networks are often used for function
- approximation, density estimation and pattern classification.
- These tasks are also the aims of statistical regression
- methods. Some methods used in the neural network literature and in
- the statistical regression literature are the same, some are
- different, and some are closely related. Recently, the
- connections between the methods in the two literatures have been
- explored from a number of angles, e.g.: (1) connecting feedforward
- nets to parametric statistical regression for theoretical studies
- of multilayer feedforward nets; (2) relating the
- performance of feedforward nets to the bias-variance trade-off
- of nonparametric statistics; (3) connecting radial basis
- function (RBF) nets to nonparametric kernel regression to obtain
- new theoretical results on the approximation ability, convergence
- rate and receptive field size of RBF networks (see the sketch
- below); (4) using the VC dimension to study the generalization
- ability of multilayer feedforward nets; (5) using other statistical
- methods such as projection pursuit, cross-validation, the EM
- algorithm, CART and MARS for training feedforward nets. Many
- interesting open issues remain in each of these areas, and the
- statistical regression literature offers many further methods and
- theoretical results on both nonparametric and parametric
- regression (e.g., L1 kernel estimation, ...).
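-
- As a minimal sketch of connection (3), assuming made-up data and
- bandwidth: a normalized RBF network with one Gaussian basis function
- centered on each training point, and output weights fixed to the
- targets, is exactly the Nadaraya-Watson kernel regression estimator:
-
-     import numpy as np
-
-     rng = np.random.default_rng(1)
-     x = rng.uniform(0, 2 * np.pi, 40)           # training inputs
-     y = np.sin(x) + 0.1 * rng.normal(size=40)   # noisy targets
-     h = 0.3                                     # kernel width = RBF width
-
-     def nadaraya_watson(x0):
-         """Normalized-RBF output = kernel regression estimate."""
-         k = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # Gaussian basis activations
-         return (k @ y) / k.sum()                # normalization step
-
-     print(nadaraya_watson(1.0), np.sin(1.0))    # estimate vs. truth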
-
- Invited presentations:
- Presentations will include arranged talks and submissions.
- Submissions can be sent to either of the two organizers by email
- before Nov. 15, 1992. Each submission can be an abstract of 200--400 words.
- =========================================================================
- Computational Models of Visual Attention
-
- Organizer: Pete Sandon, Dartmouth College
-
- Abstract:
- Visual attention refers to the process by which some part of the
- visual field is selected over other parts for preferential processing.
- The details of the attentional mechanism in humans have been the subject
- of much recent psychophysical experimentation.
- Along with the abundance of new data, a number of theories of attention
- have been proposed, some in the form of computational models
- simulated on computers.
- The goal of this workshop is to bring together computational modelers
- and experimentalists to evaluate the status of current theories
- and to identify the most promising avenues for improving
- understanding of the mechanisms and behavioral roles of visual
- attention.
-
- Invited presentations:
- Pete Sandon "The time course of selection"
- John Tsotsos "Inhibitory beam model of visual attention"
- Kyle Cave "Mapping the Allocation of Spatial Attention:
- Knowing Where Not to Look"
- Mike Mozer "A principle for unsupervised decomposition and hierarchical
- structuring of visual objects"
- Eric Lumer "On the interaction between perceptual grouping, object
- selection, and spatial orientation of attention"
- Steve Yantis "Mechanisms of human visual attention:
- Bottom-up and top-down influences"
- =========================================================================
- Comparison and Unification of Algorithms, Loss Functions
- and Complexity Measures for Learning
-
- Organizers: Isabelle Guyon, Michael Kearns and Esther Levin, AT&T Bell Labs
-
- Abstract:
- The purpose of this workshop is to clarify and unify the relationships
- between many well-studied learning algorithms, loss functions, and
- combinatorial and statistical measures of learning problem complexity.
-
- Many results investigating the principles underlying supervised learning from
- empirical observations have the following general flavor: first, a "general
- purpose" learning algorithm is chosen for study (for example, gradient descent
- or maximum a posteriori). Next, an appropriate loss function is selected, and
- the details of the learning model are specified (such as the mechanism
- generating the observations). The analysis results in a bound on the loss
- of the algorithm
- in terms of a "complexity measure" such as the Vapnik-Chervonenkis dimension
- or the statistical capacity.
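-
- One common form of such a bound (stated here from memory, for 0-1
- loss and a hypothesis class of VC dimension $d$; the constants vary
- across the literature): with probability at least $1-\delta$ over
- $m$ training examples, every hypothesis $h$ in the class satisfies
-
-     \[
-        L(h) \;\le\; \hat{L}(h) \;+\;
-        \sqrt{\frac{d\,(\ln(2m/d) + 1) + \ln(4/\delta)}{m}} \, .
-     \]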
-
- We hope that reviewing the literature with an explicit emphasis on comparisons
- between algorithms, loss functions and complexity measures will result in a
- deeper understanding of the similarities and differences of the many possible
- approaches to and analyses of supervised learning, and aid in extracting the
- common general principles underlying all of them. Significant gaps in our
- knowledge concerning these relationships will suggest new directions in
- research.
-
- Half of the available time has been reserved for discussion and informal
- presentations. We anticipate and encourage active audience participation.
- Each discussion period will begin by soliciting topics of interest from the
- participants for investigation. Thus, participants are strongly encouraged
- to think about issues they would like to see discussed and clarified prior
- to the workshop. All talks will be tutorial in nature.
-
- Invited presentations:
- Michael Kearns, Isabelle Guyon and Esther Levin:
- -Overview on loss functions
- -Overview on general purpose learning algorithms
- -Overview on complexity measures
- David Haussler: Overview on "Chinese menu" results
- =========================================================================
- Activity-Dependent Processes in Neural Development
-
- Organizer: Adina Roskies, Salk Institute
-
- Abstract: This workshop will focus on the role of activity in setting
- up neural architectures. Biological systems rely upon a variety of
- cues, both activity-dependent and independent, in establishing their
- architectures. Network architectures have traditionally been
- pre-specified, but ongoing construction of architectures may
- endow networks with more computational power than do static
- architectures. Biological issues such as the role of activity in
- development, the mechanisms by which it operates, and the type of
- activity necessary will be explored, as well as computational issues
- such as the computational value of such processes, the relation to
- Hebbian learning, and constructivist algorithms.
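-
- As a minimal sketch of the Hebbian end of this spectrum (Python;
- input statistics and step size are illustrative placeholders), Oja's
- normalized Hebb rule lets a unit's weights develop through activity
- alone toward the leading principal component of its input:
-
-     import numpy as np
-
-     rng = np.random.default_rng(2)
-     C = np.array([[3.0, 1.0], [1.0, 1.0]])         # input covariance
-     X = rng.multivariate_normal([0, 0], C, 5000)   # stream of activity
-     w = rng.normal(size=2)
-     eta = 0.01
-
-     for x in X:
-         y = w @ x                     # postsynaptic activity
-         w += eta * y * (x - y * w)    # Oja: Hebb term minus decay term
-
-     print(w / np.linalg.norm(w))      # ~ leading eigenvector (up to sign)
-     print(np.linalg.eigh(C)[1][:, -1])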
-
- Invited presentations:
- General Overview (Adina Roskies)
- The role of NMDA in cortical development (Tony Bell)
- Optimality, local learning rules, and the emergence of function in a
- sensory processing network (Ralph Linsker)
- Mechanisms and models of neural development through rapid
- volume signals (Read Montague)
- The role of activity in cortical development and plasticity
- (Brad Schlaggar)
- Computational advantages of constructivist algorithms (Steve Quartz)
- Learning, development, and evolution (Rik Belew)
- =========================================================================
- DETERMINISTIC ANNEALING AND COMBINATORIAL OPTIMIZATION
-
- Organizer: Anand Rangarajan, Yale Univ.
-
- Abstract: Optimization problems defined on ``mixed variables'' (analog
- and digital) occur in a wide variety of connectionist applications.
- Recently, several advances have been made in deterministic annealing
- techniques for optimization. Deterministic annealing is a faster and
- more efficient alternative to simulated annealing. This workshop
- will focus on several of these new techniques (emerging in the last
- two years). Topics include improved elastic nets for the traveling salesman
- problem, new algorithms for graph matching, the relationship between
- deterministic annealing algorithms and older, more conventional techniques,
- applications in early vision problems like surface reconstruction, internal
- generation of annealing schedules, etc.
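-
- A minimal sketch of the mean-field flavor of deterministic annealing
- (Python; the toy energy, couplings and schedule are illustrative
- placeholders, not any of the workshop algorithms), minimizing
- E(v) = -0.5 v'Wv - b'v over v in {0,1}^n with analog units:
-
-     import numpy as np
-
-     rng = np.random.default_rng(3)
-     n = 8
-     W = rng.normal(size=(n, n))
-     W = (W + W.T) / 2.0                 # symmetric couplings
-     np.fill_diagonal(W, 0.0)
-     b = rng.normal(size=n)
-
-     v = np.full(n, 0.5)                 # analog units start undecided
-     T = 2.0
-     while T > 0.01:
-         for _ in range(50):             # relax to mean-field fixed point
-             z = np.clip((W @ v + b) / T, -50.0, 50.0)
-             v = 1.0 / (1.0 + np.exp(-z))   # sigmoid mean-field update
-         T *= 0.9                        # lower the temperature
-
-     v_bin = (v > 0.5).astype(int)       # decisions sharpen as T -> 0
-     print(v_bin, -0.5 * v_bin @ W @ v_bin - b @ v_bin)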
-
- Invited presentations:
- Alan Yuille, Statistical Physics algorithms that converge
- Chien-Ping Lu, Competitive elastic nets for TSP
- Paul Stolorz, Recasting deterministic annealing as constrained
- optimization
- Davi Geiger, Surface reconstruction from uncertain data
- on images and stereo images.
- Anand Rangarajan, A new deterministic annealing algorithm for
- graph matching
- =========================================================================
- The Computational Neuron
-
- Organizer: Terry Sejnowski, Salk Institute (tsejnowski@ucsd.edu)
-
- Abstract:
- Neurons are complex dynamical systems. Nonlinear properties arise
- from voltage-sensitive ionic currents and synaptic conductances; branched
- dendrites provide a geometric substrate for synaptic integration and learning
- mechanisms. What can subthreshold nonlinearities in dendrites be used to
- compute? How do the time courses of ionic currents affect synaptic
- integration and Hebbian learning mechanisms? How are ionic channels in
- dendrites regulated? Why are there so many different types of neurons?
- These are a few of the issues that we will be discussing. In addition to
- short scheduled presentations designed to stimulate discussion, we invite
- members of the audience to present one-viewgraph talks to introduce
- additional topics.
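-
- As a deliberately crude sketch of the "neuron as dynamical system"
- view (Python; all constants are illustrative placeholders, far below
- the biophysical detail at issue here), a leaky integrate-and-fire
- unit driven by a constant current:
-
-     dt, tau = 0.1, 10.0                       # time step, membrane tau (ms)
-     v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
-     I = 20.0                                  # constant drive (mV-equivalent)
-     v, spikes = v_rest, []
-
-     for step in range(1000):                  # 100 ms of simulated time
-         v += (dt / tau) * (-(v - v_rest) + I) # leaky integration
-         if v >= v_thresh:                     # threshold crossing: spike
-             spikes.append(step * dt)
-             v = v_reset                       # reset after the spike
-
-     print("spike times (ms):", spikes[:5], "...")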
-
- Invited presentations:
- Larry Abbott - Neurons as dynamical systems.
- Tony Bell - Self-organization of ionic channels in neurons.
- Tom McKenna - Single neuron computation.
- Bart Mel - Computing capacity of dendrites.
- =========================================================================
- ROBOT LEARNING
-
- Organizers: Sebastian Thrun (CMU), Tom Mitchell (CMU), David Cohn (MIT)
-
- Abstract:
- Robot learning has captured the attention of many researchers over the
- past few years. Previous robotics research has demonstrated the
- difficulty of manually encoding sufficiently accurate models of the
- robot and its environment to succeed at complex tasks. Recently a wide
- variety of learning techniques ranging from statistical calibration
- techniques to neural networks and reinforcement learning have been
- applied to problems of perception, modeling and control. Robot
- learning is characterized by sensor noise, control error, dynamically
- changing environments and the opportunity for learning by
- experimentation.
-
- This workshop will provide a forum for researchers active in the area
- of robot learning and related fields. It will include informal
- tutorials and presentations of recent results, given by experts in
- this field, as well as significant time for open discussion. Problems
- to be considered include: How can current learning robot techniques
- scale to more complex domains, characterized by massive sensor input,
- complex causal interactions, and long time scales? How can previously
- acquired knowledge accelerate subsequent learning? What
- representations are appropriate and how can they be learned?
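-
- As a minimal sketch of the reinforcement-learning end of these
- techniques (Python; states, rewards and hyperparameters are
- illustrative placeholders), tabular Q-learning on a toy
- one-dimensional "corridor" with a goal at the right end:
-
-     import numpy as np
-
-     n_states = 6                         # state 5 is the goal
-     actions = (-1, +1)                   # move left / move right
-     Q = np.zeros((n_states, 2))
-     alpha, gamma = 0.5, 0.9
-     rng = np.random.default_rng(4)
-
-     for episode in range(200):
-         s = 0
-         while s != n_states - 1:
-             a = int(rng.integers(2))     # random exploratory behaviour
-             s2 = min(max(s + actions[a], 0), n_states - 1)
-             r = 1.0 if s2 == n_states - 1 else 0.0
-             target = r + (gamma * Q[s2].max() if r == 0.0 else 0.0)
-             Q[s, a] += alpha * (target - Q[s, a])  # no bootstrap at goal
-             s = s2
-
-     print(Q.argmax(axis=1)[:-1])         # greedy policy: all 1s (go right)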
-
- Invited speakers:
- Chris Atkeson
- Steve Hanson
- Satinder Singh
- Andrew W. Moore
- Richard Yee
- Andy Barto
- Tom Mitchell
- Mike Jordan
- Dean Pomerleau
- Steve Suddarth
- =========================================================================
- Connectionist Approaches to Symbol Grounding
-
- Organizers: Georg Dorffner, Univ. Vienna; Michael Gasser, Indiana Univ.
- Stevan Harnad, Princeton Univ.
-
- Abstract:
- In recent years, there has been increasing discomfort with the
- disembodied nature of symbols that is a hallmark of the symbolic
- paradigm in cognitive science and artificial intelligence, and at the
- same time increasing interest in the potential offered by
- connectionist models to ``ground'' symbols.
- In ignoring the mechanisms by which their symbols get ``hooked up'' to
- sensory and motor processes, that is, the mechanisms by which
- intelligent systems develop categories, symbolists have missed out on
- what is not only one of the more challenging areas in cognitive
- science but, some would argue, the very heart of what cognition is about.
- This workshop will focus on issues in neural network based
- approaches to the grounding of symbols and symbol structures.
- In particular, connectionist models of categorisation and
- of label-category association will be discussed in the light of
- the symbol grounding problem.
-
- Invited presentations:
- "Grounding Symbols in the Analog World of Objects: Can Neural
- Nets Make the Connection?" Stevan Harnad, Princeton University
-
- "Learning Perceptually Grounded Lexical Semantics"
- Terry Regier, George Lakoff, Jerry Feldman, ICSI Berkeley
-
- T.B.A. Gary Cottrell, Univ. of California, San Diego
-
- "Learning Perceptual Dimensions" Michael Gasser, Indiana University
-
- "Symbols and External Embodiments - why Grounding has to Go
- Two Ways" Georg Dorffner, University of Vienna
-
- "Grounding Symbols on Conceptual Knowledge" Philippe Schyns, MIT
- =========================================================================
- Continuous Speech Recognition: Is there a connectionist advantage?
-
- Organizer: Michael Franzini (maf@cs.cmu.edu)
-
- Abstract:
- This workshop will address the following questions: How do neural
- networks compare to the alternative technologies available for
- speech recognition? What evidence is available to suggest that
- connectionism may lead to better speech recognition systems?
- What comparisons have been performed between connectionist and
- non-connectionist systems, and how ``fair'' are these
- comparisons? Which approaches to connectionist speech recognition have
- produced the best results, and which are likely to produce the
- best results in the future?
-
- Traditionally, the selection criteria for NIPS papers reflect a
- much greater emphasis on theoretical importance of work than on
- performance figures, despite the fact that recognition rate is
- one of the most important considerations for speech recognition
- researchers (and often is {\em the} most important factor in
- determining their financial support). For this reason, this
- workshop -- to be oriented more towards performance than
- methodology -- will be of interest to many NIPS participants.
-
- The issue of connectionist vs. HMM performance in speech
- recognition is controversial in the speech recognition community.
- The validity of past comparisons is often disputed, as is the
- fundamental value of neural networks. In this workshop, an attempt
- will be made to address this issue and the questions stated above
- by citing specific experimental results and by making arguments
- with a theoretical basis.
-
- Preliminary list of speakers:
- Ron Cole
- Uli Bodenhausen
- Hermann Hild
- =========================================================================
- Symbolic and Subsymbolic Information Processing in
- Biological Neural Circuits and Systems
-
- Organizer: Vasant Honavar (honavar@iastate.edu)
-
- Abstract:
- Traditional information processing models in cognitive psychology,
- which became popular with the advent of the serial computer, tended
- to view cognition as discrete, sequential symbol processing.
- Neural network or connectionist models offer an alternative paradigm
- for modelling cognitive phenomena that relies on continuous, parallel
- subsymbolic processing. Biological systems appear to combine both
- discrete as well as continuous, sequential as well as parallel,
- symbolic as well as subsymbolic information processing in various
- forms at different levels of organization. The flow of neurotransmitter
- molecules and of photons into receptors is quantal; the depolarization
- and hyperpolarization of neuron membranes are analog; the genetic code
- and the decoding processes appear to be digital; global interactions
- mediated by neurotransmitters and slow waves appear to be both analog and
- digital.
-
- The purpose of this workshop is to bring together interested
- computer scientists, neuroscientists, psychologists, mathematicians,
- engineers, physicists and systems theorists to examine and discuss
- specific examples as well as general principles (to the extent they can
- be gleaned from our current state of knowledge) of information processing
- at various levels of organization in biological neural systems.
-
- The workshop will consist of several short presentations by participants.
- There will be ample time for informal presentations and discussion centering
- around a number of key topics such as:
-
- * Computational aspects of symbolic vs. subsymbolic information processing
- * Coordination and control structures and processes in neural systems
- * Encoding and decoding structures and processes in neural systems
- * Generative structures and processes in neural systems
- * Suitability of particular paradigms for modelling specific phenomena
- * Software requirements for modelling biological neural systems
-
- Invited presentations: TBA
- Those interested in giving a presentation should write to honavar@iastate.edu
- =========================================================================
- Computational Issues in Neural Network Training
-
- Organizers: Scott Markel and Roger Crane, Sarnoff Research
-
- Abstract:
- Many of the best practical neural network training results are
- reported by researchers who use variants of back-propagation and/or
- develop their own algorithms. Few results are obtained by using
- classical numerical optimization methods, although such methods can
- be used effectively for many practical applications. Many competent
- researchers have concluded, based on their own experience, that
- classical methods have little value in solving real problems.
- However, use of the best commercially available implementations of
- such algorithms can help in understanding numerical and
- computational issues that arise in all training methods. Also,
- classical methods can be used effectively to solve practical
- problems. Examples of numerical issues that are appropriate to
- discuss in this workshop include: convergence rates; local minima;
- selection of starting points; conditioning (for higher order
- methods); characterization of the error surface; ... .
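-
- A minimal sketch of that premise (Python; assumes a generic scipy
- installation, and the architecture, data and tolerances are
- illustrative placeholders): training a tiny one-hidden-layer network
- by handing its loss to a classical quasi-Newton optimizer (BFGS)
- rather than hand-coded back-propagation:
-
-     import numpy as np
-     from scipy.optimize import minimize
-
-     rng = np.random.default_rng(5)
-     X = rng.uniform(-1, 1, (64, 2))
-     y = np.sin(X[:, 0]) * np.cos(X[:, 1])      # toy regression target
-     nh = 5                                     # hidden units
-
-     def loss(p):
-         W1 = p[:2 * nh].reshape(2, nh)         # input-to-hidden weights
-         b1 = p[2 * nh:3 * nh]                  # hidden biases
-         w2 = p[3 * nh:]                        # hidden-to-output weights
-         h = np.tanh(X @ W1 + b1)
-         return np.mean((h @ w2 - y) ** 2)      # mean squared error
-
-     p0 = 0.1 * rng.normal(size=4 * nh)
-     res = minimize(loss, p0, method="BFGS")    # classical optimizer
-     print(res.fun, res.nit)                    # final loss, iterations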
-
- Ample time will be reserved for discussion and informal presentations. We
- will encourage lively audience participation.
- =========================================================================
- Real Applications of Real Biological Circuits
-
- Organizers: Richard Granger, UC Irvine and Jim Schwaber, Du Pont
-
- Abstract:
- The architectures, performance rules and learning rules of most artificial
- neural networks are at odds with the anatomy and physiology of real
- biological neural circuitry. For example, mammalian telencephalon
- (forebrain) is characterized by extremely sparse connectivity (~1-5%),
- almost entirely lacks dense recurrent connections, and has extensive lateral
- local circuit connections; inhibition is delayed-onset and relatively
- long-lasting (100s of milliseconds) compared to rapid-onset brief excitation
- (10s of milliseconds), and they are not interchangeable. Excitatory
- connections learn, but there is very little evidence for plasticity in
- inhibitory connections. Real synaptic plasticity rules are sensitive to
- temporal information, are not Hebbian, and do not contain "supervision"
- signals in any form related to those common in ANNs.
-
- These discrepancies between natural and artificial NNs raise the question of
- whether such biological details are largely extraneous to the behavioral and
- computational utility of neural circuitry, or whether such properties may
- yield novel rules that confer useful computational abilities to networks
- that use them. In this workshop we will explicitly analyze the power and
- utility of a range of novel algorithms derived from detailed biology, and
- illustrate specific industrial applications of these algorithms in the fields
- of process control and signal processing.
-
- It is anticipated that these issues will raise controversy, and half of
- the workshop will be dedicated to open discussion.
-
- Preliminary list of speakers:
- Jim Schwaber, DuPont
- Babatunde Ogunnaike, DuPont
- Richard Granger, University of California, Irvine
- John Hopfield, Cal Tech
- =========================================================================
- Recognizing Unconstrained Handwritten Script
-
- Organizers: Krishna Nathan, IBM and James A. Pittman, MCC
-
- Abstract:
- Neural networks have given new life to an old research topic, the
- segmentation and recognition of on-line handwritten script.
- Isolated handprinted character recognition systems are moving from
- research to product development, and researchers have moved
- forward to integrated segmentation and recognition projects.
- However, the 'real world' problem is best described as one of
- unconstrained handwriting recognition (often on-line) since it
- includes both printed and cursive styles -- often within the same
- word.
-
- The workshop will provide a forum for participants to share ideas on
- preprocessing, segmentation, and recognition techniques, and the use
- of context to improve the performance of online handwriting recognition
- systems. We will also discuss issues related to what constitutes
- acceptable recognition performance. The collection of training and
- test data will also be addressed.
- =========================================================================
- Time Series Analysis and Prediction
-
- Organizers: John Moody, Oregon Grad. Inst., Mike Mozer, Univ. of
- Colorado and Andreas Weigend, Xerox PARC
-
- Abstract:
- Several new techniques are now being applied to the problem of predicting
- the future behavior of a temporal sequence and deducing properties of the
- system that produced the time series. We will discuss both connectionist
- and non-connectionist techniques. Issues include algorithms and
- architectures, model selection, performance measures, iterated vs. long-term
- prediction, robust prediction and estimation, the number of degrees of
- freedom of the system, how much noise is in the data, whether it is chaotic
- or not, how the error grows with prediction time, detection and classification
- of signals in noise, etc. Half the available time has been reserved for
- discussion and informal presentations. We will encourage lively audience
- participation.
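-
- As a minimal sketch of the iterated-prediction issue (Python; the
- series and model order are illustrative placeholders), fit a
- one-step linear autoregressive predictor by least squares, then feed
- its own outputs back to forecast many steps ahead:
-
-     import numpy as np
-
-     t = np.arange(400)
-     rng = np.random.default_rng(6)
-     series = np.sin(0.1 * t) + 0.05 * rng.normal(size=400)
-     order = 4                                  # number of lags
-
-     # (lag vector -> next value) training pairs
-     X = np.array([series[i:i + order] for i in range(len(series) - order)])
-     y = series[order:]
-     w, *_ = np.linalg.lstsq(X, y, rcond=None)  # one-step predictor
-
-     window = list(series[-order:])             # seed with the last lags
-     forecast = []
-     for _ in range(50):                        # iterated prediction
-         nxt = float(np.dot(w, window))
-         forecast.append(nxt)
-         window = window[1:] + [nxt]            # feed prediction back in
-
-     print(forecast[:5])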
-
- Invited presentations:
- Classical and Non-Neural Approaches: Advantages and Problems.
- (John Moody)
- Connectionist Approaches: Problems and Interpretations. (Mike Mozer)
- Beyond Prediction: What can we learn about the system? (Andreas Weigend)
- Physiological Time Series Modeling (Volker Tresp)
- Financial Forecasting (William Finnoff / Georg Zimmerman)
- FIR Networks (Eric Wan)
- Dimension Estimation (Fernando Pineda)
- =========================================================================
- Applications of VLSI Neural Networks
-
- Organizer: Dave Andes, Naval Air Warfare Center
-
- Abstract: This workshop will provide a forum for discussion
- of the problems and opportunities for neural net hardware
- systems which solve real problems under real time and space
- constraints. Some of the most difficult requirements for
- systems of this type come, not surprisingly, from the military.
- Several examples of these problems and VLSI solutions will be
- discussed in this working group. Examples from outside the
- military will also be discussed. At least half the time will
- be devoted to open discussion of the issues raised by the
- experiences of those who have already applied VLSI based ANN
- techniques to real world problems.
-
- Preliminary list of speakers:
- Bill Camp, IBM Federal Systems
- Lynn Kern, Naval Air Warfare Center
- Chuck Glover, Oak Ridge National Lab
- Dave Andes, Naval Air Warfare Center
-
-
-
- ------------------------------
-
- End of Neuron Digest [Volume 10 Issue 14]
- *****************************************
-